Tools and libraries to model and manipulate circular programs
This paper presents techniques to model circular lazy programs in a strict, purely functional setting. Circular lazy programs express any algorithm based on multiple traversals over a recursive data structure as a single traversal function. Such elegant and concise circular programs are defined in a (strict or lazy) functional language and are transformed into efficient strict, deforested, multiple-traversal programs using attribute-grammar-based techniques. Moreover, we use standard slicing techniques to slice such circular lazy programs. We have expressed these transformations as a Haskell library, and two tools have been constructed: the HaCirc tool, which refactors lazy circular Haskell programs into strict ones, and the OCirc tool, which extends OCaml with circular definitions, allowing programmers to write circular programs in OCaml notation that are transformed into strict OCaml programs before they are executed. The first benchmarks of the different implementations are presented and show that, for algorithms relying on a large number of traversals, the resulting strict, deforested programs are more efficient than the lazy ones, both in terms of runtime and memory consumption.
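The transformation described above can be illustrated on the classic repmin problem. The sketch below is our own minimal illustration in Python, not the output of the HaCirc or OCirc tools: the strict, deforested version performs a single traversal that pairs each subtree's minimum with a function that rebuilds the subtree once the global minimum is known.

```python
# repmin: replace every leaf of a binary tree by the global minimum.
# A naive solution traverses twice (find the minimum, then rebuild); the
# strict, deforested version below makes one traversal that returns the
# subtree minimum together with a continuation rebuilding the subtree
# once the global minimum is known. Trees are tuples, leaves are ints.
def repmin_pass(tree):
    if isinstance(tree, int):                    # leaf
        return tree, lambda m: m
    left, right = tree                           # fork
    lmin, lbuild = repmin_pass(left)
    rmin, rbuild = repmin_pass(right)
    return min(lmin, rmin), lambda m: (lbuild(m), rbuild(m))

def repmin(tree):
    m, build = repmin_pass(tree)                 # single traversal
    return build(m)                              # cheap reconstruction

print(repmin((3, ((1, 4), 2))))                  # (1, ((1, 1), 1))
```

In a lazy language, the pair of results and the fed-back minimum can instead be expressed as a single circular definition; the higher-order continuation here plays the role of that circularity in a strict setting.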
For showing to U.S. and Brazilian people: a memory of bilateral relations through the social network Flickr
This paper investigates the memory of bilateral relations produced by the U.S. Embassy in Brazil on the social network Flickr in the early months of the administration of the Democrat President Barack Obama. The set of images and texts in 19 posts made by the Embassy’s profile on the network in the first half of 2009 is analyzed here. It establishes connections between the production of memory, the U.S. political conjuncture, and U.S. political myths, such as Abraham Lincoln and John Kennedy. It also discusses ways of remembering and forgetting several events in bilateral relations throughout the 20th and 21st centuries, besides the positions published on the social network regarding Brazilian politics and history.
Keywords: Social Networks; Flickr; President – United States
Automatic Text Summarization
Writing text was one of the first ever methods used by humans to represent their knowledge.
Text can be of different types and have different purposes.
Due to the evolution of information systems and the Internet, the amount of textual information available has increased exponentially on a worldwide scale, and many documents tend to contain a percentage of unnecessary information. As a result, most readers have difficulty digesting all the extensive information contained in the multiple documents produced on a daily basis.
A simple solution to the excess of irrelevant information in texts is to create summaries, in which we keep the parts related to the subject and remove the unnecessary ones.
In Natural Language Processing, the goal of automatic text summarization is to create systems that process text and keep only the most important data. Since its creation, several approaches have been designed to create better text summaries, which can be divided into two separate groups: extractive approaches and abstractive approaches.
In the first group, the summarizers decide which text elements should be in the summary. The criteria by which they are selected are diverse. After they are selected, they are combined into the summary. In the second group, the text elements are generated from scratch. Abstractive summarizers are much more complex, so they still require considerable research in order to produce good results.
During this thesis, we investigated the state-of-the-art approaches, implemented our own versions, and tested them on conventional datasets, like the DUC dataset.
Our first approach was a frequency-based approach, since it analyses the frequency with which the text's words/sentences appear in the text. Higher-frequency words/sentences automatically receive higher scores, which are then filtered with a compression rate and combined into a summary.
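A rough illustration of such frequency-based scoring follows; this is our own sketch, and the function name, tokenization, and tie-breaking are assumptions, not the thesis implementation.

```python
import re
from collections import Counter

def frequency_summary(text, compression=0.5):
    """Score each sentence by the summed corpus frequency of its words and
    keep the top fraction given by the compression rate (illustrative)."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r'\w+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    keep = max(1, int(len(sentences) * compression))
    top = sorted(scored, reverse=True)[:keep]   # highest scores first
    top.sort(key=lambda t: t[1])                # restore original order
    return ' '.join(s for _, _, s in top)
```

The compression rate directly controls how many of the highest-scoring sentences survive into the summary.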
In our second approach, we improved the original TextRank algorithm by combining it with word embedding vectors. The goal was to represent the text's sentences as nodes of a graph and, with the help of word embeddings, determine how similar pairs of sentences are and rank them by their similarity scores. The highest-ranking sentences were filtered with a compression rate and picked for the summary.
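The graph-based ranking just described can be sketched as below. The input vectors stand in for sentence embeddings (averaged word vectors in the thesis), and the PageRank-style iteration is a simplified, illustrative version of TextRank, not the thesis code.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def textrank(vectors, damping=0.85, iters=50):
    """Rank sentences by power iteration over a cosine-similarity graph."""
    n = len(vectors)
    sim = [[cosine(vectors[i], vectors[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out = sum(sim[j])            # total similarity leaving node j
                if out > 0 and sim[j][i] > 0:
                    rank += scores[j] * sim[j][i] / out
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return scores
```

Sentences whose embeddings are similar to many others accumulate higher scores, and the top-ranked ones are then selected with the compression rate as in the other extractive approaches.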
In the third approach, we combined feature analysis with deep learning. By analysing certain characteristics of the text's sentences, one can assign scores that represent the importance of a given sentence for the summary. With these computed values, we created a dataset for training a deep neural network that is capable of deciding whether a certain sentence should be included in the summary.
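The feature side of this approach might look like the sketch below. The three features (position, length, keyword overlap) are typical examples of our own choosing, and the hand-set weights merely stand in for the trained deep network described above.

```python
def sentence_features(sentences, keywords):
    """Per-sentence features: relative position, normalized length, and
    keyword overlap -- the kind of inputs one could feed a neural scorer."""
    max_len = max(len(s.split()) for s in sentences)
    feats = []
    for i, s in enumerate(sentences):
        words = set(w.lower().strip('.,') for w in s.split())
        feats.append({
            'position': 1.0 - i / max(1, len(sentences) - 1),  # earlier = higher
            'length': len(s.split()) / max_len,
            'keywords': len(words & keywords) / max(1, len(keywords)),
        })
    return feats

# Hand-set, purely illustrative weights standing in for the trained network.
WEIGHTS = {'position': 0.3, 'length': 0.2, 'keywords': 0.5}

def score(feat, weights=WEIGHTS):
    return sum(weights[k] * feat[k] for k in weights)
```

In the thesis, the feature vectors instead become training examples for a deep neural network that learns the weighting itself.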
An abstractive encoder-decoder summarizer was created with the purpose of generating words related to the document's subject and combining them into a summary. Finally, every single summarizer was combined into a full system.
Each one of our approaches was evaluated with several evaluation metrics, such as ROUGE. We used the DUC dataset for this purpose, and the results were fairly similar to the ones in the scientific community. As for our encoder-decoder, we obtained promising results.

Text is one of the most important tools for transmitting ideas between human beings. It can be of several types, and its content can be easier or harder to interpret, depending on the amount of relevant information about the main subject.
To make processing easier for the reader, there is a mechanism purposely created to reduce the irrelevant information in a text, called text summarization. Through summarization, reduced versions of the original text are created while the information about the main subject is kept.
Due to the creation and evolution of the Internet and other means of communication, there has been an exponential increase in textual documents, an event known as information overload; most of these documents contain unnecessary information about the subject they address.
To address this global problem, automatic text summarization emerged within the scientific field of Natural Language Processing, making it possible to create automatic summaries of any type of text, in any language, through computational algorithms.
Since its creation, countless text summarization techniques have been devised, which can be classified into two different types: extractive and abstractive. In extractive techniques, elements of the original text, such as words or whole sentences that best illustrate the subject of the text, are transcribed and combined into a document. In abstractive techniques, the algorithms generate new elements.
In this dissertation, some of the techniques with the best results were researched, implemented, and combined in order to build a complete system for producing summaries.
Of the implemented techniques, the first three are extractive while the last one is abstractive. The first focuses on computing the frequencies of the text's elements, assigning scores to the most frequent sentences, which are then chosen for the summary through a compression rate. Another technique represents the textual elements as nodes of a graph, assigns similarity scores between them, and then selects the sentences with the highest scores through a compression rate. A further approach was created to combine a mechanism for analysing the text's characteristics with methods based on artificial intelligence: each sentence has a set of characteristics that are used to train a neural network model, which evaluates and decides which sentences should belong to the summary, filtering them through a compression rate. An abstractive summarizer was created to generate words about the subject of the text and combine them into a summary. Each of these summarizers was combined into a single system. Finally, each of the techniques can be evaluated with several evaluation metrics, such as ROUGE. According to the evaluation results of the techniques on the DUC dataset, our summarizers obtained results fairly similar to those found in the scientific community, with particular note for the encoder-decoder, which in certain cases showed promising results.
Statically analyzing the energy efficiency of software product lines
Optimizing software to become (more) energy efficient is an important concern for the software industry. Although several techniques have been proposed to measure energy consumption within software engineering, little work has specifically addressed Software Product Lines (SPLs). SPLs are a widely used software development approach, where the core concept is to study the systematic development of products that can be deployed in a variable way, e.g., to include different features for different clients. The traditional approach for measuring energy consumption in SPLs is to generate and individually measure all products, which, given their large number, is impractical. We present a technique, implemented in a tool, to statically estimate the worst-case energy consumption for SPLs. The goal is to reason about energy consumption in all products of an SPL, without having to individually analyze each product. Our technique combines static analysis and worst-case prediction with energy consumption analysis, in order to analyze products in a feature-sensitive manner: a feature that is used in several products is analyzed only once, while the energy consumption is estimated once per product. This paper describes not only our previous work on worst-case prediction, for comprehensibility, but also a significant extension of such work. This extension has been realized along two different axes: first, we incorporated into our methodology a simulated annealing algorithm to improve our worst-case energy consumption estimation. Second, we evaluated our new approach on four real-world SPLs, containing a total of 99 software products. Our new results show that our technique is able to estimate the worst-case energy consumption with a mean error percentage of 17.3% and a standard deviation of 11.2%.
This paper acknowledges the support of the Erasmus+ Key Action 2 (Strategic Partnership for Higher Education) project No. 2020-1-PT01-KA203-078646: SusTrainable - Promoting Sustainability as a Fundamental Driver in Software Development Training and Education.
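The feature-sensitive idea, analyse each feature once but estimate once per product, can be sketched as follows. This is an illustrative simplification of our own that assumes additive per-feature costs; the paper's actual analysis involves worst-case prediction and simulated annealing, neither of which is modelled here.

```python
# Feature-sensitive estimation (illustrative, not the paper's tool): each
# feature's energy cost is computed once and cached; every product's
# estimate is then just the sum over its feature set.
def worst_case_per_product(products, analyze_feature):
    cache = {}                               # feature -> cost, filled once
    def cost(f):
        if f not in cache:
            cache[f] = analyze_feature(f)    # expensive analysis, run once
        return cache[f]
    return {name: sum(cost(f) for f in feats)
            for name, feats in products.items()}
```

Even with three products sharing feature A, the (expensive) analysis of A runs only once, which is what makes the approach scale to SPLs with many products.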
Multiple intermediate structure deforestation by shortcut fusion
Shortcut fusion is a well-known optimization technique for functional programs. Its aim is to transform multi-pass algorithms into single-pass ones, achieving deforestation of the intermediate structures that multi-pass algorithms need to construct. Shortcut fusion has already been extended in several ways. It can be applied to monadic programs, maintaining the global effects, and also to obtain circular and higher-order programs. The techniques proposed so far, however, only consider programs defined as the composition of a single producer with a single consumer. In this paper, we analyse shortcut fusion laws to deal with programs consisting of an arbitrary number of function compositions. (C) 2016 Elsevier B.V. All rights reserved.
We would like to thank the anonymous reviewers for their detailed and helpful comments. This work was partially funded by ERDF - European Regional Development Fund through the COMPETE Programme (Operational Programme for Competitiveness) and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within projects FCOMP-01-0124-FEDER-020532 and FCOMP-01-0124-FEDER-022701.
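In Python, the effect of fusing a chain with an arbitrary number of compositions can be approximated with lazy streams. The real shortcut fusion laws are stated for foldr/build-style combinators in a lazy functional setting, so this sketch of ours only mimics the deforestation effect: no intermediate list is ever materialized between stages.

```python
# Deforestation by streaming: a producer composed with several transformers
# and one consumer, with no intermediate lists built along the way.
def producer(n):
    for i in range(n):
        yield i

def compose_chain(source, *stages):
    for stage in stages:                  # arbitrary number of compositions
        source = map(stage, source)       # map binds 'stage' now; stays lazy
    return source

# Consumer: a fold over the fused pipeline.
total = sum(compose_chain(producer(5), lambda x: x + 1, lambda x: x * x))
```

Note the use of `map` rather than a generator expression inside the loop: `map` captures each `stage` immediately, avoiding Python's late-binding pitfall while keeping the whole chain a single lazy pass.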
Shortcut fusion rules for the derivation of circular and higher-order monadic programs
Functional programs often combine separate parts using intermediate data structures for communicating results. Programs so defined are modular, easier to understand and maintain, but suffer from inefficiencies due to the generation of those gluing data structures. To eliminate such redundant data structures, some program transformation techniques have been proposed. One such technique is shortcut fusion, which has been studied in the context of both pure and monadic functional programs.
In this paper, we study several shortcut fusion extensions, so that, alternatively, circular or higher-order programs are derived. These extensions are provided both for effect-free programs and for monadic ones. Our work results in a set of generic calculation rules that are widely applicable and whose correctness is formally established.
Multiple intermediate structure deforestation by shortcut fusion
Lecture Notes in Computer Science, Volume 8129, 2013.
Shortcut fusion is a well-known optimization technique for functional programs. Its aim is to transform multi-pass algorithms into single-pass ones, achieving deforestation of the intermediate structures that multi-pass algorithms need to construct. Shortcut fusion has already been extended in several ways. It can be applied to monadic programs, maintaining the global effects, and also to obtain circular and higher-order programs. The techniques proposed so far, however, only consider programs defined as the composition of a single producer with a single consumer. In this paper, we analyse shortcut fusion laws to deal with programs consisting of an arbitrary number of function compositions.
FCT - Fundação para a Ciência e a Tecnologia (FCOMP-01-0124-FEDER-022701).
A multi-criteria approach for irrigation water management
The major implications that the European Union (EU) Water Framework Directive (WFD) may have for irrigated agriculture were analysed using alternative water policy measures.
The consequences of policy change were evaluated in a case study (Baixo Alentejo, Portugal), using a Multi-Criteria Decision Making (MCDM) model that simulates farmers' preferred behaviour. The study compares the effects of water pricing (volumetric and flat tariffs) and consumption quotas on farmers' income, water agency revenues, agricultural employment, and water demand for irrigation.
Model results indicate that the adjustments in farmers' responses depend on the policy strategy enforced and on the policy level.
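A toy weighted-sum comparison can illustrate the multi-criteria flavour of such a model. All figures, criterion names, and weights below are hypothetical and far simpler than the study's MCDM simulation of farmers' behaviour.

```python
# Rank policy alternatives by a weighted sum of min-max normalized criteria.
# Criteria in 'maximize' are better when higher; the rest when lower.
def mcdm_rank(alternatives, weights, maximize):
    lo = {c: min(a[c] for a in alternatives.values()) for c in weights}
    hi = {c: max(a[c] for a in alternatives.values()) for c in weights}
    def norm(c, v):
        span = hi[c] - lo[c]
        x = (v - lo[c]) / span if span else 0.5
        return x if c in maximize else 1.0 - x   # invert "lower is better"
    return sorted(((sum(weights[c] * norm(c, a[c]) for c in weights), name)
                   for name, a in alternatives.items()), reverse=True)
```

A weighted sum is only one of many MCDM aggregation schemes; the study's model additionally simulates how farmers adjust cropping decisions under each policy.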
Economics of energy storage in a residential consumer context
With the increase of electricity tariffs and the decreasing costs of distributed generation technologies, more and more residential consumers are deploying local generation systems to satisfy their electricity demand in order to reduce overall cost. Typically, however, a mismatch between electricity generation and demand remains. Storage systems enable consumers to reduce this mismatch by storing locally generated electricity for later consumption, instead of feeding excess generation into the grid. This paper analyzes the economics of storage installations in a residential consumer context. A linear program is presented to determine the optimal dispatch, and Simulated Annealing is used to identify the cost-minimizing system configuration. The developed approach is tested for a multi-family house in Germany.
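The dispatch decision can be illustrated with a simple greedy self-consumption heuristic of our own. The paper formulates optimal dispatch as a linear program; this sketch, with hypothetical names and units, ignores efficiency losses and tariffs and just shows the mismatch a battery can absorb.

```python
# Greedy self-consumption dispatch over hourly steps (units: kWh).
# Surplus generation charges the battery, deficit discharges it; whatever
# the battery cannot absorb or supply is exported to / imported from the grid.
def dispatch(generation, demand, capacity):
    soc, grid_import, grid_export = 0.0, 0.0, 0.0
    for g, d in zip(generation, demand):
        surplus = g - d
        if surplus >= 0:
            charge = min(surplus, capacity - soc)
            soc += charge
            grid_export += surplus - charge
        else:
            discharge = min(-surplus, soc)
            soc -= discharge
            grid_import += -surplus - discharge
    return grid_import, grid_export, soc
```

An LP, by contrast, can also exploit time-varying tariffs (charging from the grid when cheap), which a greedy rule like this cannot.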